From Pixels to Sentiment: Fine-tuning CNNs for Visual Sentiment Prediction
Visual multimedia have become an inseparable part of our digital social
lives, and they often capture moments tied to deep emotions. Automated
visual sentiment analysis tools can provide a means of extracting the rich
feelings and latent dispositions embedded in these media. In this work, we
explore how Convolutional Neural Networks (CNNs), now a de facto machine
learning tool, particularly in Computer Vision, can be applied to the task of
visual sentiment prediction. We accomplish this through fine-tuning
experiments with a state-of-the-art CNN and, via rigorous architecture
analysis, present several modifications that lead to
accuracy improvements over prior art on a dataset of images from a popular
social media platform. We additionally present visualizations of local patterns
that the network learned to associate with image sentiment for insight into how
visual positivity (or negativity) is perceived by the model.
Comment: Accepted for publication in Image and Vision Computing. Models and source code available at https://github.com/imatge-upc/sentiment-201
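As a rough illustration of the fine-tuning recipe the abstract describes, the sketch below adapts an ImageNet-pretrained network to a two-class (positive/negative) sentiment head. The torchvision ResNet-18 backbone and the hyperparameters are stand-in assumptions, not the paper's setup; the linked repository holds the actual models.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in backbone: the paper fine-tunes a CaffeNet-style CNN; a torchvision
# ResNet-18 is used here only to illustrate the generic fine-tuning recipe.
model = models.resnet18(weights="IMAGENET1K_V1")

# Swap the 1000-way ImageNet classifier for a 2-way sentiment head.
model.fc = nn.Linear(model.fc.in_features, 2)

# Small learning rate so the pretrained features are adapted, not destroyed.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch with 0/1 sentiment labels."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```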
Comparing Fixed and Adaptive Computation Time for Recurrent Neural Networks
Adaptive Computation Time for Recurrent Neural Networks (ACT) is one of the
most promising architectures for variable computation. ACT adapts to the input
sequence by looking at each sample more than once and learning how many times
it should do so. In this paper, we compare ACT to Repeat-RNN, a novel
architecture based on repeating each sample a fixed number of times. We found
the surprising result that Repeat-RNN performs as well as ACT on the selected
tasks. Source code in TensorFlow and PyTorch is publicly available at
https://imatge-upc.github.io/danifojo-2018-repeatrnn/
Comment: Accepted as workshop paper at ICLR 2018
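The contrast between the two architectures is easy to state in code: ACT learns a per-step pondering budget, while Repeat-RNN fixes it. Below is a minimal Repeat-RNN sketch, assuming a GRU cell and an illustrative `repeats` hyperparameter; the released TensorFlow/PyTorch code is authoritative.

```python
import torch
import torch.nn as nn

class RepeatRNN(nn.Module):
    """Repeat-RNN in miniature: feed every time step to the recurrent cell
    a fixed number of times, instead of letting the network learn how often
    to ponder each input as ACT does."""

    def __init__(self, input_size, hidden_size, repeats=3):
        super().__init__()
        self.repeats = repeats
        self.cell = nn.GRUCell(input_size, hidden_size)

    def forward(self, x):  # x: (seq_len, batch, input_size)
        h = x.new_zeros(x.size(1), self.cell.hidden_size)
        outputs = []
        for t in range(x.size(0)):
            for _ in range(self.repeats):  # fixed pondering budget per step
                h = self.cell(x[t], h)
            outputs.append(h)
        return torch.stack(outputs)        # (seq_len, batch, hidden_size)
```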
Class-Weighted Convolutional Features for Visual Instance Search
Image retrieval in realistic scenarios targets large dynamic datasets of
unlabeled images. In these cases, training or fine-tuning a model every time
new images are added to the database is neither efficient nor scalable.
Convolutional neural networks trained for image classification over large
datasets have been proven effective feature extractors for image retrieval. The
most successful approaches are based on encoding the activations of
convolutional layers, as they convey the image spatial information. In this
paper, we go beyond this spatial information and propose a local-aware encoding
of convolutional features based on semantic information predicted in the target
image. To this end, we obtain the most discriminative regions of an image using
Class Activation Maps (CAMs). Since CAMs are based on knowledge already
contained in the network, our approach has the additional advantage of not
requiring external information. In addition, we use CAMs to generate object
proposals during an unsupervised re-ranking stage after a first fast search.
Our experiments on two publicly available datasets for instance retrieval,
Oxford5k and Paris6k, demonstrate the competitiveness of our approach, which
outperforms the current state of the art when using off-the-shelf models
trained on ImageNet. The source code and model used in this paper are publicly
available at http://imatge-upc.github.io/retrieval-2017-cam/.
Comment: To appear in the British Machine Vision Conference (BMVC), September 2017
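A minimal sketch of the class-weighted encoding idea, assuming NumPy arrays for the last convolutional activations and the classifier weights; the paper's exact aggregation and its CAM-based re-ranking stage live in the linked repository.

```python
import numpy as np

def cam_weighted_descriptor(features, fc_weights, class_idx):
    """Weight conv activations by the Class Activation Map of a predicted
    class, then sum-pool into one l2-normalized retrieval descriptor.

    features:   (C, H, W) activations of the last convolutional layer
    fc_weights: (num_classes, C) weights of the classification layer
    class_idx:  class whose CAM drives the spatial weighting
    """
    # CAM = class-specific weighted sum over channels (Zhou et al., CVPR 2016)
    cam = np.tensordot(fc_weights[class_idx], features, axes=(0, 0))  # (H, W)
    cam = np.maximum(cam, 0)
    cam /= cam.max() + 1e-8                  # normalize to [0, 1]
    weighted = features * cam                # emphasize discriminative regions
    desc = weighted.reshape(features.shape[0], -1).sum(axis=1)  # sum-pool
    return desc / (np.linalg.norm(desc) + 1e-8)                 # l2-normalize
```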
Budget-aware Semi-Supervised Semantic and Instance Segmentation
Methods that move towards less supervised scenarios are key for image
segmentation, as dense labels demand significant human intervention. Generally,
the annotation burden is mitigated by labeling datasets with weaker forms of
supervision, e.g. image-level labels or bounding boxes. Another option is
semi-supervised settings, which commonly leverage a few strong annotations and
a huge amount of unlabeled/weakly-labeled data. In this paper, we revisit
semi-supervised segmentation schemes and significantly narrow down the
annotation budget (in terms of total labeling time of the training set)
compared to previous approaches. With a very simple pipeline, we demonstrate
that at low annotation budgets, semi-supervised methods outperform
weakly-supervised ones by a wide margin for both semantic and instance
segmentation. Our
approach also outperforms previous semi-supervised works at a much reduced
labeling cost. We present results for the Pascal VOC benchmark and unify weakly
and semi-supervised approaches by considering the total annotation budget, thus
allowing a fairer comparison between methods.
Comment: To appear in CVPR-W 2019 (DeepVision workshop)
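To make the budget framing concrete, here is a toy accounting sketch. The per-annotation times are assumptions for illustration only; the paper derives its budgets from reported labeling times for Pascal VOC, not from these numbers.

```python
# Assumed labeling times, for illustration only.
SECONDS_PER_FULL_MASK = 240.0    # dense pixel-level mask for one image
SECONDS_PER_IMAGE_LABEL = 20.0   # weak image-level labels for one image

def split_budget(total_seconds, strong_fraction):
    """Return how many strongly and weakly annotated images a total
    labeling-time budget buys when `strong_fraction` of it is spent on
    dense masks and the rest on image-level labels."""
    strong_budget = total_seconds * strong_fraction
    n_strong = int(strong_budget // SECONDS_PER_FULL_MASK)
    n_weak = int((total_seconds - strong_budget) // SECONDS_PER_IMAGE_LABEL)
    return n_strong, n_weak

# e.g. a 100-hour budget with 80% spent on dense masks:
print(split_budget(100 * 3600, 0.8))  # -> (1200, 3600)
```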
Cultural Event Recognition with Visual ConvNets and Temporal Models
This paper presents our contribution to the ChaLearn Challenge 2015 on
Cultural Event Classification. The challenge in this task is to automatically
classify images from 50 different cultural events. Our solution is based on the
combination of visual features extracted from convolutional neural networks
with temporal information using a hierarchical classifier scheme. We extract
visual features from the last three fully connected layers of both CaffeNet
(pretrained on ImageNet) and our fine-tuned version for the ChaLearn
challenge. We propose a late fusion strategy that trains a separate low-level
SVM on each of the extracted neural codes. The class predictions of the
low-level SVMs form the input to a higher level SVM, which gives the final
event scores. We achieve our best result by adding a temporal refinement step
into our classification scheme, which is applied directly to the output of each
low-level SVM. Our approach penalizes high classification scores based on
visual features when their timestamp aligns poorly with an event-specific
temporal distribution learned from the training and validation data. Our system
achieved the second best result in the ChaLearn Challenge 2015 on Cultural
Event Classification, with a mean average precision of 0.767 on the test set.
Comment: Initial version of the paper accepted at the CVPR Workshop ChaLearn Looking at People 2015
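The two-level late fusion can be sketched as a simple stacking of linear SVMs. The snippet below assumes scikit-learn and illustrative variable names, and omits the temporal-refinement penalty the paper applies to the low-level scores.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_hierarchy(codes, y):
    """Late fusion: one low-level SVM per neural code (here, one feature
    matrix per fully connected layer), then a higher-level SVM trained on
    the concatenated class scores of the low-level models."""
    low_level = [LinearSVC().fit(X, y) for X in codes]
    stacked = np.hstack([clf.decision_function(X)
                         for clf, X in zip(low_level, codes)])
    return low_level, LinearSVC().fit(stacked, y)

def predict_hierarchy(low_level, high_level, codes):
    stacked = np.hstack([clf.decision_function(X)
                         for clf, X in zip(low_level, codes)])
    return high_level.predict(stacked)
```

In practice the higher-level SVM is better fit on held-out scores (e.g. from validation folds) rather than on scores from the same data the low-level SVMs were trained on, to avoid feeding it overconfident inputs.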
Mask-guided sample selection for Semi-Supervised Instance Segmentation
Image segmentation methods are usually trained with pixel-level annotations,
which require significant human effort to collect. The most common solution to
address this constraint is to implement weakly-supervised pipelines trained
with lower forms of supervision, such as bounding boxes or scribbles. Another
option is semi-supervised methods, which leverage a large amount of unlabeled
data and a limited number of strongly-labeled samples. In this second setup,
samples to be strongly-annotated can be selected randomly or with an active
learning mechanism that chooses the ones that will maximize the model
performance. In this work, we propose a sample selection approach to decide
which samples to annotate for semi-supervised instance segmentation. Our method
consists of first predicting pseudo-masks for the unlabeled pool of samples,
together with a score that predicts the quality of each mask. This score is an
estimate of the Intersection Over Union (IoU) of the segment with the ground
truth mask. We study which samples are better to annotate given the quality
score, and show how our approach outperforms a random selection, leading to
improved performance for semi-supervised instance segmentation with low
annotation budgets.
Comment: Preprint submitted to Multimedia Tools and Applications
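A minimal sketch of quality-driven selection, assuming a dictionary mapping unlabeled image ids to the predicted IoU of their pseudo-masks. Which end of the ranking is best to annotate is precisely what the paper studies, so both policies appear here as options rather than as the paper's conclusion.

```python
def select_for_annotation(predicted_iou, k, strategy="lowest"):
    """Pick k unlabeled samples to strongly annotate, driven by the
    predicted IoU of their pseudo-masks."""
    ranked = sorted(predicted_iou, key=predicted_iou.get)  # ascending IoU
    if strategy == "lowest":
        return ranked[:k]    # samples the current model segments worst
    if strategy == "highest":
        return ranked[-k:]   # samples with the most reliable pseudo-masks
    raise ValueError(f"unknown strategy: {strategy}")

# e.g. send the 100 least reliable pseudo-masks to annotators:
# to_label = select_for_annotation(predicted_iou, k=100, strategy="lowest")
```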